Results

As described above, learning data for all participants were split into the first 6 stimulus iterations and the last 6 before model fitting. Additionally, the two conditions, set-size 3 and set-size 6, were also fit separately.
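This splitting procedure can be sketched as follows; the trial-record field names (`subject`, `set_size`, `stimulus`, `trial`) are illustrative assumptions, not the actual data format:

```python
from collections import defaultdict

def split_halves(trials):
    """Split trials into the first 6 and the remaining iterations of each
    stimulus, counted separately per subject and set-size condition.

    trials: list of dicts with (assumed) keys 'subject', 'set_size',
    'stimulus', and 'trial' (presentation order), plus any response fields.
    """
    seen = defaultdict(int)  # iterations seen per (subject, set_size, stimulus)
    first, last = [], []
    for t in sorted(trials, key=lambda t: t["trial"]):
        key = (t["subject"], t["set_size"], t["stimulus"])
        (first if seen[key] < 6 else last).append(t)
        seen[key] += 1
    return first, last
```

The two set-size conditions can then be fit separately by filtering each half on `set_size`.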

Overview of model-fitting results

Figure 1

We found that the LTM model fit more subjects than the RL model in both halves (first half: LTM M = 66.5, RL M = 16.5; second half: LTM M = 61.5, RL M = 21.5) and in both conditions (set-size 3: LTM M = 67, RL M = 16; set-size 6: LTM M = 61, RL M = 22), much like the results obtained through the model-fitting procedure in Experiment 1 (Figure 1). Furthermore, more subjects fit the LTM model in the set-size 3 condition than in the set-size 6 condition (with more in the first half than in the second, for both conditions); conversely, more subjects fit the RL model in the set-size 6 condition than in set-size 3 (with more in the second half than in the first, for both conditions). This trend aligns with Collins's (2018) findings, but these aggregate counts do not take individual dynamics into account (covered in detail below).

Overview of model-fitting quality

The plot below shows the mean BIC value of the best-fitting model for each half and each set-size.
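For reference, BIC-based model selection of this kind can be sketched as below; the per-subject log-likelihoods and parameter counts in the usage are placeholders, not values from the actual fits:

```python
import math

def bic(log_likelihood, n_params, n_trials):
    """Bayesian Information Criterion: the complexity penalty k*ln(n)
    minus twice the log-likelihood. Lower values indicate a better fit."""
    return n_params * math.log(n_trials) - 2 * log_likelihood

def best_model(fits, n_trials):
    """fits: {model_name: (log_likelihood, n_params)} for one subject.
    Returns the name of the model with the lowest BIC."""
    return min(fits, key=lambda m: bic(*fits[m], n_trials))
```

A subject is counted toward a model when that model yields the lowest BIC for their data; the plotted means then average these winning BIC values within each half and set-size.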

Dynamics

Does the best-fitting model change from the first half to the second half, and between the two set-sizes?

Next, we sought to track learning dynamics for each individual learner. In other words, we wanted to see whether learners changed strategies in response to (1) their learning experience, by comparing model fits for the first and second halves of learning; (2) task demands, by comparing model fits for the two set-size conditions; and (3) interactions between the two.

We found that over 81% of learners who were best fit by the LTM model in the first half were also best fit by it in the second half of learning (set-size 3: n = 12; set-size 6: n = 12). In contrast, more than 50% of the subjects who were best fit by the RL model in the first half were also best fit by RL in the second half (set-size 3: n = 6; set-size 6: n = 8); the rest shifted to LTM.

Patterns of model fits across the set-sizes were similar to the half-to-half fits above. More than 75% of subjects who were best fit by the LTM model in the set-size 3 blocks were also best fit by it in the set-size 6 blocks, and these subjects were largely the same across the first and second halves of the task (half 1: n = 54; half 2: n = 49). In contrast, fewer of the subjects best fit by the RL model in the set-size 3 blocks were also best fit by RL in the set-size 6 blocks, and these numbers differed between the halves (half 1: n = 4; half 2: n = 9).
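The stability figures above come from a simple transition tabulation over per-subject best-fit labels, which can be sketched as follows (the subject-to-label dictionaries in the usage are illustrative):

```python
from collections import Counter

def transition_proportions(fit_a, fit_b):
    """fit_a, fit_b: {subject_id: best_model_name} for two phases
    (e.g. first vs. second half, or set-size 3 vs. set-size 6).

    Returns {(model_a, model_b): proportion of subjects best fit by
    model_a in phase A who were best fit by model_b in phase B}.
    """
    pairs = Counter((fit_a[s], fit_b[s]) for s in fit_a)
    totals = Counter()
    for (model_a, _), n in pairs.items():
        totals[model_a] += n  # subjects starting from model_a
    return {pair: n / totals[pair[0]] for pair, n in pairs.items()}
```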

How do the groups in Experiment 1 fare in Experiment 2?

Let us assume that there are high-RL learners (most likely to fit the RL model), high-LTM learners (most likely to fit the LTM model), and those in the middle who are likely to fit the combination models. If we fit the set-size 3 and set-size 6 blocks separately, how would these three groups behave, and what would that tell us about their metacognition? We expect the people at the extremes to use the same strategy for both set-sizes, while the people in the middle would respond more to task demands and use LTM for set-size 3 and RL for set-size 6, as Collins (2018) predicts.

Group differences in performance (accuracy) for switchers vs non-switchers

term                          df  sumsq      meansq     statistic   p.value
stable                         1  0.1494705  0.1494705  22.6675485  0.0000029
half                           1  0.0023208  0.0023208   0.3519494  0.5534360
condition                      1  0.2592686  0.2592686  39.3186879  0.0000000
model                          1  0.3923497  0.3923497  59.5007614  0.0000000
stable:half                    1  0.0448541  0.0448541   6.8022337  0.0095368
stable:condition               1  0.0624612  0.0624612   9.4723869  0.0022686
half:condition                 1  0.0020298  0.0020298   0.3078181  0.5794146
stable:model                   1  0.0007781  0.0007781   0.1180054  0.7314364
half:model                     1  0.0053745  0.0053745   0.8150585  0.3673166
condition:model                1  0.1884138  0.1884138  28.5733940  0.0000002
stable:half:condition          1  0.0026140  0.0026140   0.3964152  0.5294014
stable:half:model              1  0.0093748  0.0093748   1.4217142  0.2340161
stable:condition:model         1  0.0387988  0.0387988   5.8839228  0.0158394
half:condition:model           1  0.0001240  0.0001240   0.0188042  0.8910167
stable:half:condition:model    1  0.0001290  0.0001290   0.0195575  0.8888690
Residuals                    316  2.0837131  0.0065940          NA         NA

term                          df  sumsq      meansq     statistic   p.value
stable                         1  0.0185432  0.0185432   3.1381286  0.0774453
half                           1  0.0023208  0.0023208   0.3927518  0.5313090
condition                      1  0.2555172  0.2555172  43.2421367  0.0000000
model                          1  0.5411171  0.5411171  91.5752982  0.0000000
stable:half                    1  0.0514896  0.0514896   8.7137825  0.0033946
stable:condition               1  0.0089605  0.0089605   1.5164268  0.2190767
half:condition                 1  0.0012936  0.0012936   0.2189290  0.6401797
stable:model                   1  0.0650626  0.0650626  11.0107824  0.0010116
half:model                     1  0.0075093  0.0075093   1.2708339  0.2604656
condition:model                1  0.2976787  0.2976787  50.3772889  0.0000000
stable:half:condition          1  0.0178245  0.0178245   3.0165133  0.0833950
stable:half:model              1  0.0087152  0.0087152   1.4749077  0.2254797
stable:condition:model         1  0.0763722  0.0763722  12.9247560  0.0003761
half:condition:model           1  0.0000000  0.0000000   0.0000015  0.9990233
stable:half:condition:model    1  0.0224305  0.0224305   3.7960057  0.0522600
Residuals                    316  1.8672395  0.0059090          NA         NA
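The tables report a four-way factorial ANOVA on accuracy (stable × half × condition × model). As a reminder of how the `sumsq`, `meansq`, and `statistic` columns relate, here is a minimal one-way version of the same computation, a sketch rather than the actual four-way analysis:

```python
from statistics import mean

def one_way_anova(groups):
    """groups: one list of accuracy scores per factor level.
    Returns (df_between, df_within, ss_between, ss_within, F), where
    F = (ss_between / df_between) / (ss_within / df_within)."""
    scores = [x for g in groups for x in g]
    grand = mean(scores)
    # Between-group sum of squares: group means around the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group (residual) sum of squares: scores around their group mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return df_between, df_within, ss_between, ss_within, f_stat
```

In the tables, each row's `meansq` is its `sumsq` divided by its `df`, and `statistic` is that mean square divided by the residual mean square (0.0065940 and 0.0059090, respectively).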

Are the parameter values largely different for each half and set-size?

Individual plots